Partial MaxSAT (PMS) and Weighted PMS (WPMS) are two practical generalizations of the MaxSAT problem. In this paper, we propose a local search algorithm for these problems, called BandHS, which applies two multi-armed bandits to guide the search direction when escaping local optima. One bandit is associated with all the soft clauses to help the algorithm select appropriate soft clauses to satisfy, and the other is associated with all the literals in hard clauses to help the algorithm select appropriate literals for satisfying the hard clauses. These two bandits improve the algorithm's search ability in both feasible and infeasible solution spaces. We further propose an initialization method for (W)PMS that prioritizes both unit and binary clauses when producing the initial solutions. Extensive experiments demonstrate the excellent performance and generalization capability of our proposed methods, which greatly boost the state-of-the-art local search algorithm SATLike3.0 and the state-of-the-art SAT-based incomplete solver NuWLS-c.
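As a concrete illustration of the bandit scheme described above, the following is a minimal sketch of a UCB-style bandit with one arm per soft clause; the reward definition (here, the change in satisfied soft weight after a move) and all names are our own assumptions, not the BandHS implementation.

```python
import math

class SoftClauseBandit:
    """Minimal UCB-style bandit with one arm per soft clause
    (an illustrative sketch, not the authors' BandHS code)."""

    def __init__(self, num_clauses, exploration=1.4):
        self.counts = [0] * num_clauses    # pulls per arm
        self.values = [0.0] * num_clauses  # running mean reward per arm
        self.total = 0
        self.c = exploration

    def select(self, falsified):
        # Choose among soft clauses falsified by the current assignment:
        # pull any untried arm first, otherwise maximize the UCB score.
        self.total += 1
        for i in falsified:
            if self.counts[i] == 0:
                return i
        return max(falsified, key=lambda i: self.values[i]
                   + self.c * math.sqrt(math.log(self.total) / self.counts[i]))

    def update(self, arm, reward):
        # Incremental mean; the reward (e.g., change in total satisfied
        # soft weight after the escaping move) is an assumption here.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

The same skeleton would apply to the second bandit by letting the arms range over the literals of falsified hard clauses instead of soft clauses.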
Maximum Common induced Subgraph (MCS) is an important NP-hard problem with a wide range of real-world applications. Branch-and-Bound (BnB) is the basis of a class of efficient algorithms for MCS, which successively select vertices to match and prune the search when it can be determined that no solution better than the best found so far can be obtained. The method of selecting the vertices to match is essential to the performance of BnB. In this paper, we propose a new value function and a hybrid selection strategy used in reinforcement learning to define a new vertex selection method, and propose a new BnB algorithm for MCS, called McSplitDAL. Extensive experiments show that McSplitDAL significantly improves the current best BnB algorithms, McSplit+LL and McSplit+RL. An empirical analysis is also performed to illustrate why the new value function and the hybrid selection strategy are effective.
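To make the hybrid selection strategy concrete, here is a small sketch of how a BnB solver might mix a classic degree heuristic with a learned value function; the epsilon-greedy mixing rule and the reward are illustrative assumptions, not McSplitDAL itself.

```python
import random

def select_vertex(candidates, degree, learned_value, eps=0.1):
    """Hybrid vertex selection sketch for a BnB MCS solver: with
    probability eps fall back to the degree heuristic, otherwise
    branch on the vertex with the highest learned value."""
    if random.random() < eps:
        return max(candidates, key=lambda v: degree[v])
    return max(candidates, key=lambda v: learned_value[v])

def update_value(learned_value, v, reward, lr=0.1):
    # RL-style incremental update; the reward could be, e.g., how much
    # the matched subgraph grew after branching on v (an assumption).
    learned_value[v] += lr * (reward - learned_value[v])
```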
The Travelling Salesman Problem (TSP) is a classical NP-hard combinatorial optimization problem with many practical variants. The Lin-Kernighan-Helsgaun (LKH) algorithm is one of the state-of-the-art local search algorithms for the TSP, and LKH-3 is a powerful extension of LKH that can solve many TSP variants. Both LKH and LKH-3 associate a candidate set with each city to improve efficiency, and use two different methods, called the $\alpha$-measure and POPMUSIC, to decide the candidate sets. In this work, we first propose a Variable Strategy Reinforced LKH (VSR-LKH) algorithm, which combines three reinforcement learning methods (Q-learning, Sarsa, and Monte Carlo) with the LKH algorithm for solving the TSP. We further propose a new algorithm called VSR-LKH-3 that combines the variable strategy reinforcement learning method with LKH-3 for typical TSP variants, including the TSP with time windows (TSPTW) and the colored TSP (CTSP). The proposed algorithms replace the inflexible traversal operations in LKH and LKH-3 and let the algorithms learn to make a choice at each search step by reinforcement learning. Both LKH and LKH-3, with either the $\alpha$-measure or the POPMUSIC method, can be significantly improved by our methods. Specifically, empirical results on 236 public and widely used TSP benchmarks with up to 85,900 cities demonstrate the excellent performance of VSR-LKH, and the extended VSR-LKH-3 also significantly outperforms the state-of-the-art heuristics for the TSPTW and CTSP.
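For intuition, the following is a tabular Q-learning step of the kind described above for choosing among candidate cities during a move; the state/action encoding and the reward (tour-length improvement) are simplifying assumptions of ours, not the actual VSR-LKH code.

```python
def q_learning_update(Q, city, cand, reward, next_city, next_cands,
                      alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for candidate selection: Q[(i, j)]
    estimates the benefit of choosing candidate city j when extending
    a k-opt move from city i."""
    best_next = max((Q.get((next_city, k), 0.0) for k in next_cands),
                    default=0.0)
    old = Q.get((city, cand), 0.0)
    # reward: e.g., the tour-length improvement of the resulting move
    Q[(city, cand)] = old + alpha * (reward + gamma * best_next - old)
```

Under the variable-strategy scheme, the same table could instead be updated with Sarsa or Monte Carlo targets, switching among them during the search.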
We address Partial MaxSAT (PMS) and Weighted PMS (WPMS), two practical generalizations of the MaxSAT problem, and propose a local search algorithm for these problems, called BandMaxSAT, that applies a multi-armed bandit model to guide the search direction. The bandit in our method is associated with all the soft clauses in the input (W)PMS instance, and each arm corresponds to a soft clause. The bandit model can help BandMaxSAT select a good direction to escape from local optima by selecting a soft clause to be satisfied in the current step, that is, selecting an arm to be pulled. We further propose an initialization method for (W)PMS that prioritizes both unit and binary clauses when producing the initial solutions. Extensive experiments demonstrate that BandMaxSAT significantly outperforms the state-of-the-art (W)PMS local search algorithm SATLike3.0. Specifically, the number of instances on which BandMaxSAT obtains better results is about twice that obtained by SATLike3.0. Moreover, we combine BandMaxSAT with the complete solver TT-Open-WBO-Inc. The resulting solver BandMaxSAT-c also outperforms some of the best state-of-the-art complete (W)PMS solvers, including SATLike-c, Loandra and TT-Open-WBO-Inc.
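The initialization idea above can be sketched in a few lines; representing clauses in DIMACS style (lists of signed integers) and ignoring the hard/soft distinction are simplifications of ours, not the paper's exact procedure.

```python
import random

def init_assignment(num_vars, clauses):
    """Initialization sketch: satisfy unit clauses first, then binary
    clauses, then assign the remaining variables randomly."""
    assign = {}
    for clause in sorted(clauses, key=len):    # unit, then binary, then longer
        if len(clause) > 2:
            break
        for lit in clause:
            var = abs(lit)
            if var not in assign:              # first free literal satisfies it
                assign[var] = lit > 0
                break
    for v in range(1, num_vars + 1):           # random values for the rest
        assign.setdefault(v, random.random() < 0.5)
    return assign
```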
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT shows strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
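One plausible reading of the implicit alignment is a shared position encoding computed from 3D coordinates and added to the tokens of both modalities; the sketch below is our assumption about that mechanism (layer sizes and shapes included), not the official CMT code.

```python
import torch.nn as nn

class CoordPE(nn.Module):
    """Sketch: encode 3D coordinates into position embeddings that are
    added to image and point-cloud tokens alike, so spatial alignment
    happens implicitly in feature space."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, tokens, coords):
        # tokens: (B, N, dim) from either modality; coords: (B, N, 3)
        return tokens + self.mlp(coords)
```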
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
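To give a feel for how simple the first attack is, here is a NAIVEATTACK-style trigger stamp applied to raw images before distillation runs; the patch size, value, and placement are illustrative assumptions.

```python
import numpy as np

def add_patch_trigger(images, patch_size=3, value=1.0):
    """Stamp a small bright square in the bottom-right corner of each
    image (a sketch of the naive trigger, applied before distillation)."""
    poisoned = images.copy()                       # (N, H, W, C) in [0, 1]
    poisoned[:, -patch_size:, -patch_size:, :] = value
    return poisoned
```

DOORPING, by contrast, would re-optimize the trigger inside the distillation loop rather than fixing it up front.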
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by fine-tuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and even more so for drum grooves, which have little precedent in the literature. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
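One simple way to expose drum MIDI to a text-only language model is to serialize each groove as a token string; the encoding below is a hypothetical example of such a scheme, not necessarily the one used in the paper.

```python
def groove_to_text(notes, step=120):
    """Serialize a drum groove as text: each note becomes a
    `pitch@tick` token on a time grid quantized to `step` ticks."""
    return " ".join(f"{pitch}@{(tick // step) * step}"
                    for tick, pitch in sorted(notes))

# e.g., a kick (36) and hi-hat (42) pattern:
print(groove_to_text([(0, 36), (0, 42), (240, 42), (480, 36)]))
# -> "36@0 42@0 42@240 36@480"
```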
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and model will be available.
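The feature-level step can be pictured as masked average pooling followed by similarity-based re-weighting; the sketch below is our simplified reading of that idea in PyTorch, with shapes as assumptions, not the official RefT code.

```python
import torch
import torch.nn.functional as F

def dynamic_class_centers(support_feats, support_masks):
    """Masked average pooling: support_feats (K, C, H, W) and binary
    support_masks (K, 1, H, W) yield one class center per support."""
    masked = support_feats * support_masks
    return masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1)

def reweight_query(query_feats, centers):
    """Gate query features (C, H, W) by cosine similarity to the class
    centers (K, C), producing one re-weighted map per class."""
    q = F.normalize(query_feats.flatten(1), dim=0)               # (C, HW)
    c = F.normalize(centers, dim=1)                              # (K, C)
    sim = (c @ q).view(centers.size(0), *query_feats.shape[1:])  # (K, H, W)
    return query_feats.unsqueeze(0) * sim.unsqueeze(1)           # (K, C, H, W)
```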
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, corroborating that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
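To fix ideas on the problem setup (not RELIANT's actual objective), a fairness-aware distillation loss could combine standard KD with a demographic-parity-style penalty on the student; everything below, including the group penalty, is an illustrative assumption.

```python
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive,
                 alpha=0.5, beta=1.0, T=2.0):
    """Sketch: cross-entropy + temperature-scaled KD + a penalty on the
    gap between mean predictions of the two sensitive groups.
    sensitive: binary group membership per node, shape (N,)."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    probs = F.softmax(student_logits, dim=1)
    gap = (probs[sensitive == 1].mean(0)
           - probs[sensitive == 0].mean(0)).abs().sum()
    return ce + alpha * kd + beta * gap
```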
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
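A rough picture of such a block, mixing self-attention (long-range, Transformer-like) with an expanded depth-wise convolution (local, CNN-like) inside an inverted-residual shape, is sketched below; the layer sizes and ordering are our assumptions, not the paper's exact iRMB.

```python
import torch.nn as nn

class IRMBSketch(nn.Module):
    """Inverted-residual block sketch: attention for long-distance
    interactions, then an expanded depth-wise conv for local detail."""
    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.proj = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = self.norm(x.flatten(2).transpose(1, 2))    # (B, HW, C)
        t, _ = self.attn(t, t, t)                      # long-range mixing
        x = x + t.transpose(1, 2).view(b, c, h, w)
        y = self.act(self.expand(x))                   # inverted expansion
        y = self.act(self.dw(y))                       # local depth-wise conv
        return x + self.proj(y)                        # residual projection
```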